288 research outputs found

    Essays In Corporate Finance

    This dissertation studies two questions in corporate finance: (1) Does knowledge sharing affect innovation? (2) How do profit sharing and loss sharing affect the choice of underwriting fees and offer prices in the IPO market? In the first chapter, I investigate the impact of knowledge sharing on innovation using the staggered adoption of the Uniform Trade Secrets Act as a plausibly exogenous source of variation in inter-firm information flow. I find that innovation becomes less efficient when information is more fragmented. To overcome the problem of limited informal knowledge exchange, companies are more likely to acquire technology through strategic alliances or mergers and acquisitions. I argue that the decrease in innovation is unlikely to be the result of substitution from patenting to "padlocking": when information flow is more restricted in a state, the innovation level of companies in that state is not affected, but that of the competitors of firms in that state declines. In the second chapter, we model share flotation, starting with the standard contract that assigns all profits above the offer price to investors and all losses below it to the underwriter. In practice, however, participants deviate from this contract: investors share some of their profits and bear some of the underwriter's losses. We therefore add profit and loss sharing to the model and allow the issuer to set the fee and the underwriter to set the price in the initial public offerings market. We find that profit sharing transfers wealth from issuers to underwriters without affecting the offer price, whereas loss sharing makes both the issuer and the underwriter better off while increasing the offer price. Empirical estimation indicates minimal profit sharing but substantial loss sharing.
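    To make the baseline contract concrete, here is a minimal sketch of the payoffs the second chapter's setup implies, where P_0 is the offer price, P_1 the aftermarket price, f the underwriting fee, and s_p, s_l the profit- and loss-sharing rates; all notation here is ours, not the dissertation's:

        \text{investor payoff}    = (1 - s_p)\,\max(P_1 - P_0,\,0) \;-\; s_l\,\max(P_0 - P_1,\,0)
        \text{underwriter payoff} = f \;+\; s_p\,\max(P_1 - P_0,\,0) \;-\; (1 - s_l)\,\max(P_0 - P_1,\,0)

    Setting s_p = s_l = 0 recovers the standard contract in which investors keep all of the upside and the underwriter absorbs all of the downside.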

    The effect of integrated reporting quality on cost of capital: A comparison between developed countries and developing countries

    Following frequent corporate scandals, corporate disclosure has evolved a new form: integrated reporting. Since its introduction, integrated reporting has attracted much academic attention. Some studies have investigated the link between integrated reporting and the cost of capital, but only a few have assessed the quality of integrated reporting, and none has compared different countries. This paper therefore examines the effect of integrated reporting quality on the cost of capital and investigates whether this effect differs between developed and developing countries. The results show that integrated reporting quality is negatively related to the cost of capital, and that this relationship is more significant in developing countries than in developed ones. Enhancing integrated reporting quality is thus an effective measure for reducing the cost of capital, and one that is particularly applicable to developing countries.
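    A stylized version of the panel regression such a comparison suggests (the variable names and the interaction term are our illustration, not necessarily the paper's exact specification):

        CoC_{it} = \beta_0 + \beta_1\, IRQ_{it} + \beta_2\, IRQ_{it} \times Developing_i + \gamma' X_{it} + \varepsilon_{it}

    where CoC is the cost of capital, IRQ the integrated reporting quality score, Developing an indicator for firms in developing countries, and X a vector of firm-level controls; the reported findings correspond to \beta_1 < 0, with \beta_2 < 0 strengthening the effect for developing countries.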

    Dense Video Object Captioning from Disjoint Supervision

    We propose a new task and model for dense video object captioning -- detecting, tracking, and captioning the trajectories of all objects in a video. This task unifies spatial and temporal understanding of the video and requires fine-grained language description. Our model for dense video object captioning is trained end-to-end and consists of different modules for spatial localization, tracking, and captioning. As such, we can train our model with a mixture of disjoint tasks and leverage diverse, large-scale datasets that supervise different parts of our model. This results in noteworthy zero-shot performance. Moreover, by finetuning a model from this initialization, we can further improve performance, surpassing strong image-based baselines by a significant margin. Although we are not aware of other work performing this task, we are able to repurpose existing video grounding datasets for our task, namely VidSTG and VLN. We show that our task is more general than grounding, and that models trained on our task can be applied directly to grounding by finding the bounding box with the maximum likelihood of generating the query sentence. Our model outperforms dedicated, state-of-the-art models for spatial grounding on both VidSTG and VLN.
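    As a minimal sketch of the grounding-by-captioning idea described above: score every detected object trajectory by the likelihood the caption head assigns to the query sentence and return the argmax. The model interface used here (detect_and_track, caption_log_likelihood) is a hypothetical stand-in for the paper's modules, not its actual API.

        import torch

        def ground_query(model, video, query_tokens):
            # Detect and track all objects, producing per-object box trajectories.
            # (Hypothetical method names standing in for the paper's modules.)
            trajectories = model.detect_and_track(video)
            # Log-probability of generating the query sentence for each trajectory.
            scores = torch.tensor([
                model.caption_log_likelihood(traj, query_tokens)
                for traj in trajectories
            ])
            # The most likely trajectory answers the spatial grounding query.
            return trajectories[int(scores.argmax())]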

    How can objects help action recognition?

    Current state-of-the-art video models process a video clip as a long sequence of spatio-temporal tokens. However, they do not explicitly model objects or their interactions across the video, and instead process all of the tokens. In this paper, we investigate how knowledge of objects can be used to design better video models, namely to process fewer tokens and to improve recognition accuracy. This contrasts with prior works, which either drop tokens at the cost of accuracy or increase accuracy at the cost of additional computation. First, we propose an object-guided token sampling strategy that enables us to retain a small fraction of the input tokens with minimal impact on accuracy. Second, we propose an object-aware attention module that enriches our feature representation with object information and improves overall accuracy. Our resulting framework achieves better performance when using fewer tokens than strong baselines. In particular, we match our baseline with 30%, 40%, and 60% of the input tokens on SomethingElse, Something-something v2, and Epic-Kitchens, respectively. When we use our model to process the same number of tokens as our baseline, we improve by 0.6 to 4.2 points on these datasets.
    Comment: CVPR 202
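    A minimal sketch of one way object-guided token sampling could work: keep only the patch tokens whose pixel footprint overlaps a detected object box. The pixel-space patch boxes and the top-k keep rule are our assumptions, not the paper's exact strategy.

        import torch

        def object_guided_sample(tokens, patch_boxes, object_boxes, keep_ratio=0.3):
            # tokens:       (N, D) patch tokens for one clip
            # patch_boxes:  (N, 4) xyxy box of each patch in pixel coordinates
            # object_boxes: (M, 4) xyxy detected object boxes
            # Pairwise intersection areas between patches and object boxes.
            lt = torch.maximum(patch_boxes[:, None, :2], object_boxes[None, :, :2])
            rb = torch.minimum(patch_boxes[:, None, 2:], object_boxes[None, :, 2:])
            inter = (rb - lt).clamp(min=0).prod(dim=-1)   # (N, M)
            overlap = inter.amax(dim=1)                   # best overlap per patch
            # Retain the keep_ratio fraction of patches most covered by objects.
            k = max(1, int(keep_ratio * tokens.shape[0]))
            keep = overlap.topk(k).indices
            return tokens[keep]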

    UDTIRI: An Open-Source Road Pothole Detection Benchmark Suite

    There is enormous potential to leverage powerful deep learning methods in the emerging field of urban digital twins, particularly in the area of intelligent road inspection, where research and data are currently limited. To facilitate progress in this field, we have developed a well-labeled road pothole dataset named Urban Digital Twins Intelligent Road Inspection (UDTIRI). We hope this dataset will enable the use of powerful deep learning methods in urban road inspection, providing algorithms with a more comprehensive understanding of the scene and maximizing their potential. Our dataset comprises 1000 images of potholes, captured in various scenarios with different lighting and humidity conditions. Our intention is to employ this dataset for object detection, semantic segmentation, and instance segmentation tasks. Our team has devoted significant effort to conducting a detailed statistical analysis and benchmarking a selection of representative algorithms from recent years. We also provide a multi-task platform for researchers to fully exploit the performance of various algorithms with the support of the UDTIRI dataset.
    Comment: Database webpage: https://www.udtiri.com/, Kaggle webpage: https://www.kaggle.com/datasets/jiahangli617/udtir
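    As an illustration of the dataset's intended detection use, a minimal sketch that groups bounding boxes by image from a COCO-style annotation file; the COCO-style layout and file names are our assumptions about the release format, so consult the database webpage for the actual structure.

        import json

        def load_pothole_boxes(annotation_path):
            # Assumed COCO-style layout: "images" and "annotations" keys,
            # boxes in [x, y, w, h]; not a documented guarantee of UDTIRI.
            with open(annotation_path) as f:
                coco = json.load(f)
            file_names = {img["id"]: img["file_name"] for img in coco["images"]}
            boxes = {}
            for ann in coco["annotations"]:
                boxes.setdefault(file_names[ann["image_id"]], []).append(ann["bbox"])
            return boxes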

    Revisiting Out-of-distribution Robustness in NLP: Benchmark, Analysis, and LLMs Evaluations

    This paper re-examines the research on out-of-distribution (OOD) robustness in the field of NLP. We find that the distribution shift settings in previous studies commonly lack adequate challenges, hindering the accurate evaluation of OOD robustness. To address these issues, we propose a benchmark construction protocol that ensures clear differentiation and challenging distribution shifts. We then introduce BOSS, a Benchmark suite for Out-of-distribution robustneSS evaluation covering 5 tasks and 20 datasets. Based on BOSS, we conduct a series of experiments on pre-trained language models to analyze and evaluate OOD robustness. First, for vanilla fine-tuning, we examine the relationship between in-distribution (ID) and OOD performance. We identify three typical patterns that unveil the inner learning mechanism and could potentially facilitate the forecasting of OOD robustness from advancements on ID datasets. We then evaluate 5 classic methods on BOSS and find that, despite exhibiting some effectiveness in specific cases, they do not offer significant improvements over vanilla fine-tuning. Further, we evaluate 5 LLMs with various adaptation paradigms and find that when sufficient ID data is available, fine-tuned domain-specific models significantly outperform LLMs on ID examples; in the case of OOD instances, however, prioritizing LLMs with in-context learning yields better results. We find that both fine-tuned small models and LLMs face challenges in effectively addressing downstream tasks. The code is public at \url{https://github.com/lifan-yuan/OOD_NLP}.
    Comment: Accepted to NeurIPS 2023 Dataset and Benchmark Track. Code is available at \url{https://github.com/lifan-yuan/OOD_NLP}
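    A minimal sketch of the kind of ID-versus-OOD comparison such a benchmark supports: evaluate one model on the ID test set and on each shifted test set, and report the gap. The accuracy helper and dataset dictionary are placeholders, not BOSS's actual API; the real evaluation harness lives in the linked repository.

        def evaluate_robustness(model, id_testset, ood_testsets, accuracy):
            # `accuracy(model, dataset)` is a placeholder scoring function.
            id_acc = accuracy(model, id_testset)
            report = {"id_accuracy": id_acc}
            for name, dataset in ood_testsets.items():
                ood_acc = accuracy(model, dataset)
                # The ID-OOD gap is the quantity the benchmark stresses.
                report[name] = {"accuracy": ood_acc, "gap": id_acc - ood_acc}
            return report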